Agreement between Two Independent Groups of Raters

Authors

Abstract


Similar Articles

Kappa Test for Agreement Between Two Raters

Introduction: This module computes power and sample size for the test of agreement between two raters using the kappa statistic. The power calculations are based on the results in Flack, Afifi, Lachenbruch, and Schouten (1988), and use ratings on k categories from two raters or judges. You can vary the category frequencies in a single run of the procedure to analyze a wide...

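As background for this kind of procedure (a generic large-sample sketch, not necessarily the exact Flack, Afifi, Lachenbruch, and Schouten, 1988, calculation), the sample size for a one-sided test of H0: \kappa = \kappa_0 against H1: \kappa = \kappa_1 is typically obtained from the asymptotic normality of \hat{\kappa}:

n \approx \left( \frac{z_{1-\alpha}\,\sigma_0 + z_{1-\beta}\,\sigma_1}{\kappa_1 - \kappa_0} \right)^{2}

where \sigma_0 and \sigma_1 are the per-observation standard deviations of \hat{\kappa} under the null and alternative values. Both depend on the number of categories k and on the assumed category frequencies, which is why the procedure lets those frequencies vary.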

Delta: a new measure of agreement between two raters.

The most common measure of agreement for categorical data is the coefficient kappa. However, kappa performs poorly when the marginal distributions are very asymmetric, is not easy to interpret, and is defined with reference to the hypothesis of independence of the responses (which is more restrictive than the hypothesis that kappa has a value of zero). This paper defines a new measure of agreement...

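For reference, the coefficient discussed above is Cohen's kappa, which rescales the observed agreement proportion p_o against the agreement p_e expected if the two raters' responses were independent with the same marginal distributions:

\kappa = \frac{p_o - p_e}{1 - p_e}, \qquad p_e = \sum_{i=1}^{k} p_{i+}\, p_{+i}

Here p_{i+} and p_{+i} are the row and column marginals of the k \times k table of joint ratings. Independence of the responses forces \kappa = 0, but \kappa can also be zero without independence, which is the distinction the abstract draws.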

Powerful Exact Unconditional Tests for Agreement between Two Raters with Binary Endpoints

Asymptotic and exact conditional approaches have often been used for testing agreement between two raters with binary outcomes. The exact conditional approach is guaranteed to respect the nominal test size, unlike the traditionally used asymptotic approach based on the standardized Cohen's kappa coefficient. An alternative to the conditional approach is an unconditional strategy which relaxes th...

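For orientation, the standardized statistic mentioned above has the familiar large-sample form (stated here as background; the paper's exact conditional and unconditional tests are not reproduced):

z = \frac{\hat{\kappa}}{\sqrt{\widehat{\mathrm{Var}}_0(\hat{\kappa})}}, \qquad \widehat{\mathrm{Var}}_0(\hat{\kappa}) = \frac{p_e + p_e^{2} - \sum_i p_{i+}\, p_{+i}\,(p_{i+} + p_{+i})}{n\,(1 - p_e)^{2}}

which is referred to a standard normal distribution under the null hypothesis of no agreement beyond chance. With small samples this approximation is not guaranteed to hold the nominal size, which is what motivates the exact alternatives.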

Questionable Raters + Low Agreement + Inadequate Sampling

Several studies have made positive claims regarding the validity, reliability, and utility of the Occupational Information Network (O*NET). In this first of three studies questioning such claims, I focused on the root cause of many of O*NET's problems: the practice of rating overly abstract and heterogeneous occupational units (OUs), collecting ratings on OUs that exhibit substantial...


A-Kappa: A measure of Agreement among Multiple Raters

Abstract: Medical data and biomedical studies are often imbalanced with a majority of observations coming from healthy or normal subjects. In the presence of such imbalances, agreement among multiple raters based on Fleiss’ Kappa (FK) produces counterintuitive results. Simulations suggest that the degree of FK’s misrepresentation of the observed agreement may be directly related to the degree o...

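To see the effect described above, here is a minimal sketch (a from-scratch implementation of the standard Fleiss' kappa formula; the toy ratings are illustrative and not taken from the paper) in which three raters agree unanimously on 9 of 10 subjects, yet FK comes out slightly negative because nearly all ratings fall in a single category:

# Fleiss' kappa for N subjects rated by n raters into k categories.
# counts[i][j] = number of raters who assigned subject i to category j.
def fleiss_kappa(counts):
    N = len(counts)
    n = sum(counts[0])                 # raters per subject (assumed constant)
    k = len(counts[0])
    p_j = [sum(row[j] for row in counts) / (N * n) for j in range(k)]        # category proportions
    P_i = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts]  # per-subject agreement
    P_bar = sum(P_i) / N               # mean observed pairwise agreement
    P_e = sum(p * p for p in p_j)      # chance agreement from category proportions
    return (P_bar - P_e) / (1 - P_e)

# 3 raters, 2 categories ("normal", "abnormal"): 9 unanimous subjects, 1 split 2-1.
toy = [[3, 0]] * 9 + [[2, 1]]
print(fleiss_kappa(toy))               # about -0.03, despite ~93% observed agreement

The denominator 1 - P_e shrinks toward zero as one category dominates, so small shortfalls of observed agreement below chance agreement get magnified; this is one way to see the counterintuitive behavior the abstract reports.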


Journal

Journal title: Psychometrika

Year: 2009

ISSN: 0033-3123, 1860-0980

DOI: 10.1007/s11336-009-9116-1